This work provides a Deep Reinforcement Learning approach to solving a periodic review inventory control system with stochastic vendor lead times, lost sales, correlated demand, and price matching. While this dynamic program has historically been considered intractable, our results show that several policy learning approaches are competitive with or outperform classical methods. In order to train these algorithms, we develop novel techniques to convert historical data into a simulator. On the theoretical side, we present learnability results on a subclass of inventory control problems, where we provide a provable reduction of the reinforcement learning problem to that of supervised learning. On the algorithmic side, we present a model-based reinforcement learning procedure (Direct Backprop) to solve the periodic review inventory control problem by constructing a differentiable simulator. Under a variety of metrics, Direct Backprop outperforms model-free RL and newsvendor baselines in both simulations and real-world deployments.
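As a rough illustration of the differentiable-simulator idea behind Direct Backprop (a minimal sketch, not the authors' implementation), the snippet below unrolls a toy lost-sales inventory simulation in PyTorch and backpropagates total reward directly into a neural ordering policy. The demand model, unit lead time, cost parameters, and network architecture are all illustrative assumptions.

```python
# Minimal sketch: a differentiable lost-sales inventory simulator unrolled in
# PyTorch, so total reward can be backpropagated into a neural ordering policy.
# Demand distribution, lead time, economics, and network sizes are assumptions.
import torch
import torch.nn as nn

price, cost, horizon = 5.0, 3.0, 52          # assumed economics and episode length
policy = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Softplus())
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

def simulate(batch=256):
    inventory = torch.zeros(batch)
    total_reward = torch.zeros(batch)
    demand = torch.distributions.Gamma(2.0, 0.5).sample((batch, horizon))  # assumed demand
    pipeline = torch.zeros(batch)            # order placed one period ago (toy lead time = 1)
    for t in range(horizon):
        inventory = inventory + pipeline     # receive last period's order
        prev_demand = demand[:, t - 1] if t > 0 else torch.zeros(batch)
        obs = torch.stack([inventory, prev_demand], dim=1)
        order = policy(obs).squeeze(-1)      # differentiable order quantity
        sales = torch.minimum(inventory, demand[:, t])   # demand beyond stock is lost
        total_reward = total_reward + price * sales - cost * order
        inventory = inventory - sales
        pipeline = order
    return total_reward.mean()

for step in range(200):
    opt.zero_grad()
    loss = -simulate()                       # maximize reward = minimize negative reward
    loss.backward()                          # gradients flow through the unrolled simulator
    opt.step()
```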
To identify expertise, forecasters should not be tested by their calibration score, which can always be made arbitrarily small, but rather by their Brier score. The Brier score is the sum of the calibration score and the refinement score; the latter measures how good the sorting of forecasts into bins with the same prediction is, and thus attests to "expertise". This raises the question of whether one can gain calibration without losing expertise, which we refer to as "calibeating". We provide a simple way to calibeat any forecast by a deterministic online procedure. We moreover show that calibeating can be achieved by a stochastic procedure that is itself calibrated, and then extend the results to simultaneously calibeating multiple procedures and to deterministic procedures that are continuously calibrated.
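The decomposition referenced above is a standard identity, and a small numerical check makes it concrete. The sketch below splits the Brier score of a binary forecast sequence into its calibration and refinement parts, binning forecasts by their predicted value; the toy data are an assumption.

```python
# Sketch: the Brier = calibration + refinement identity referenced above.
# Forecasts sharing the same predicted value form a bin; calibration measures how
# far each bin's forecast is from its empirical frequency, while refinement
# measures how well the bins sort outcomes (the outcome variance within bins).
import numpy as np

forecasts = np.array([0.2, 0.2, 0.8, 0.8, 0.8, 0.5])   # assumed toy data
outcomes  = np.array([0,   1,   1,   1,   0,   1  ])

brier = np.mean((forecasts - outcomes) ** 2)

calibration, refinement = 0.0, 0.0
for f in np.unique(forecasts):
    mask = forecasts == f
    weight = mask.mean()                 # fraction of forecasts falling in this bin
    freq = outcomes[mask].mean()         # empirical frequency of the outcome in this bin
    calibration += weight * (f - freq) ** 2
    refinement  += weight * freq * (1 - freq)

assert np.isclose(brier, calibration + refinement)     # Brier = calibration + refinement
print(brier, calibration, refinement)
```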
This paper studies sample-efficient reinforcement learning (RL) in the setting where only the optimal value function is assumed to be linearly realizable. It has recently been understood that, even under this seemingly strong assumption and with access to a generative model, the worst-case sample complexity can be prohibitively large (i.e., exponential). We study the setting in which the learner additionally has access to interactive demonstrations from an expert policy, and we propose a statistically and computationally efficient algorithm (DELPHI) for blending exploration with expert queries. In particular, DELPHI requires $\tilde{\mathcal{O}}(d)$ expert queries and $\texttt{poly}(d, H, |\mathcal{A}|, 1/\varepsilon)$ exploratory samples to provably recover an $\varepsilon$-suboptimal policy. Compared to pure RL approaches, this corresponds to an exponential improvement in sample complexity with surprisingly little expert input. Compared to prior imitation learning (IL) approaches, the number of expert demonstrations we require is independent of $H$ and logarithmic in $1/\varepsilon$, whereas all prior works required at least a linear factor in both, in addition to the same dependence on $d$. To establish that the number of expert queries needed is minimal, we show that, in the same setting, any learner whose exploration budget is polynomially bounded (in terms of $d$, $H$, and $|\mathcal{A}|$) requires at least $\tilde{\Omega}(\sqrt{d})$ oracle calls to recover a policy competitive with the expert's value function. Under the weaker assumption that the expert's policy is linear, we show that the lower bound increases to $\tilde{\Omega}(d)$.
We consider a contextual bandit problem in which actions are subsets of a ground set and the mean reward is modeled by an unknown monotone submodular function belonging to a class $\mathcal{F}$. We allow time-varying matroid constraints to be placed on the feasible sets. Assuming access to an online regression oracle with regret $\mathsf{Reg}(\mathcal{F})$, our algorithm efficiently randomizes around local optima of the estimated functions according to an inverse gap weighting strategy. We show that the cumulative regret of this procedure over a horizon of $n$ rounds scales as $O(\sqrt{n\,\mathsf{Reg}(\mathcal{F})})$ against a benchmark with a multiplicative factor of $1/2$. On the other hand, using the techniques of (Filmus and Ward 2014), we show that an $\epsilon$-greedy procedure with local randomization attains a rate of $O(n^{2/3}\,\mathsf{Reg}(\mathcal{F})^{1/3})$ against the stronger $(1 - e^{-1})$ benchmark.
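For readers unfamiliar with inverse gap weighting, the sketch below shows the randomization in its simplest finite-action (SquareCB-style) form; the paper's set-valued variant around local optima of estimated submodular functions is more involved, and the parameter gamma here is an assumed tuning knob.

```python
# Sketch of inverse gap weighting (IGW) in its simplest finite-action form:
# actions with a larger predicted-reward gap to the greedy choice receive
# proportionally less probability; leftover mass goes to the greedy action.
import numpy as np

def igw_distribution(predicted_rewards, gamma):
    best = np.argmax(predicted_rewards)
    gaps = predicted_rewards[best] - predicted_rewards
    k = len(predicted_rewards)
    probs = 1.0 / (k + gamma * gaps)        # small probability for large-gap actions
    probs[best] = 0.0
    probs[best] = 1.0 - probs.sum()         # remaining mass on the greedy action
    return probs

print(igw_distribution(np.array([0.9, 0.5, 0.4]), gamma=20.0))
```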
Stochastic gradient descent (SGD) exhibits a strong algorithmic regularization effect in practice, which is conjectured to play an important role in the generalization of modern machine learning approaches. In this work, we seek to understand these issues in the simpler setting of linear regression (covering both underparameterized and overparameterized regimes), where our goal is to make sharp instance-based comparisons of the implicit regularization afforded by (unregularized) averaged SGD with the explicit regularization of ridge regression. For a broad class of least-squares problem instances (natural in high-dimensional settings), we show: (1) for every problem instance and every ridge parameter, (unregularized) SGD, when provided with logarithmically more samples than those given to the ridge algorithm, generalizes no worse than the ridge solution (provided SGD uses a tuned constant stepsize); (2) conversely, there exist instances (in this broad problem class) where optimally-tuned ridge regression requires more samples than SGD to achieve the same generalization performance. Taken together, our results show that, up to logarithmic factors, the generalization performance of SGD is never much worse than that of ridge regression across a wide range of overparameterized problems, and may in fact be better for some problem instances. More generally, our results show how algorithmic regularization has important consequences even in the simpler (overparameterized) convex setting.
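The comparison described above can be reproduced qualitatively on a synthetic instance. The sketch below fits ridge regression and one-pass constant-stepsize averaged SGD (with a logarithmic factor more samples) on the same Gaussian least-squares problem and reports held-out error; the dimension, noise level, step size, and ridge parameter are assumptions rather than the paper's choices.

```python
# Sketch comparing (unregularized) averaged SGD with ridge regression on a
# synthetic least-squares instance; all hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, n_ridge = 50, 500
n_sgd = n_ridge * int(np.log(n_ridge))       # SGD gets a log-factor more samples
w_star = rng.normal(size=d) / np.sqrt(d)

def sample(n):
    X = rng.normal(size=(n, d))
    y = X @ w_star + 0.1 * rng.normal(size=n)
    return X, y

# Ridge regression with a fixed regularization parameter (assumed, not tuned).
X, y = sample(n_ridge)
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# One-pass SGD with a constant step size and iterate averaging.
Xs, ys = sample(n_sgd)
w, w_avg = np.zeros(d), np.zeros(d)
step = 0.01
for x_i, y_i in zip(Xs, ys):
    w -= step * (x_i @ w - y_i) * x_i        # gradient of 0.5 * (x.w - y)^2
    w_avg += w / n_sgd                       # running average of the iterates

Xt, yt = sample(10_000)                      # held-out test error
for name, wh in [("ridge", w_ridge), ("avg-SGD", w_avg)]:
    print(name, np.mean((Xt @ wh - yt) ** 2))
```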
Supervised machine learning-based medical image computing applications necessitate expert label curation, while unlabelled image data might be relatively abundant. Active learning methods aim to prioritise a subset of available image data for expert annotation, for label-efficient model training. We develop a controller neural network that measures the priority of images in a sequence of batches, as in batch-mode active learning, for multi-class segmentation tasks. The controller is optimised by rewarding positive task-specific performance gain, within a Markov decision process (MDP) environment that also optimises the task predictor. In this work, the task predictor is a segmentation network. A meta-reinforcement learning algorithm is proposed with multiple MDPs, such that the pre-trained controller can be adapted to a new MDP that contains data from different institutes and/or requires segmentation of different organs or structures within the abdomen. We present experimental results using multiple CT datasets from more than one thousand patients, with segmentation tasks of nine different abdominal organs, to demonstrate the efficacy of the learnt prioritisation controller function and its cross-institute and cross-organ adaptability. We show that the proposed adaptable prioritisation metric yields converging segmentation accuracy for the novel class of kidney, unseen in training, using approximately 40\% to 60\% of the labels otherwise required with other heuristic or random prioritisation metrics. For clinical datasets of limited size, the proposed adaptable prioritisation offers a performance improvement of 22.6\% and 10.2\% in Dice score, for tasks of kidney and liver vessel segmentation, respectively, compared to random prioritisation and alternative active sampling strategies.
Multi-agent artificial intelligence research promises a path to develop intelligent technologies that are more human-like and more human-compatible than those produced by "solipsistic" approaches, which do not consider interactions between agents. Melting Pot is a research tool developed to facilitate work on multi-agent artificial intelligence, and provides an evaluation protocol that measures generalization to novel social partners in a set of canonical test scenarios. Each scenario pairs a physical environment (a "substrate") with a reference set of co-players (a "background population"), to create a social situation with substantial interdependence between the individuals involved. For instance, some scenarios were inspired by institutional-economics-based accounts of natural resource management and public-good-provision dilemmas. Others were inspired by considerations from evolutionary biology, game theory, and artificial life. Melting Pot aims to cover a maximally diverse set of interdependencies and incentives. It includes the commonly-studied extreme cases of perfectly-competitive (zero-sum) motivations and perfectly-cooperative (shared-reward) motivations, but does not stop with them. As in real life, a clear majority of scenarios in Melting Pot have mixed incentives. They are neither purely competitive nor purely cooperative and thus demand that successful agents be able to navigate the resulting ambiguity. Here we describe Melting Pot 2.0, which revises and expands on Melting Pot. We also introduce support for scenarios with asymmetric roles, and explain how to integrate them into the evaluation protocol. This report also contains: (1) details of all substrates and scenarios; (2) a complete description of all baseline algorithms and results. Our intention is for it to serve as a reference for researchers using Melting Pot 2.0.
A central obstacle in the objective assessment of treatment effect (TE) estimators in randomized controlled trials (RCTs) is the lack of ground truth (or a validation set) against which to test their performance. In this paper, we provide a novel cross-validation-like methodology to address this challenge. The key insight of our procedure is that noisy (but unbiased) difference-of-means estimates can be used as ground-truth "labels" on one portion of an RCT to test the performance of estimators trained on another portion. We combine this insight with an aggregation scheme that borrows statistical strength across a large collection of RCTs to yield an end-to-end methodology for judging an estimator's ability to recover the underlying treatment effect. We evaluate our methodology on 709 RCTs implemented in the Amazon supply chain. In the context of Amazon's A/B tests, we highlight the unique difficulties associated with recovering the treatment effect due to the heavy-tailed nature of the response variables. In this heavy-tailed setting, our methodology indicates that procedures that aggressively downweight or truncate large values, while introducing bias, reduce variance enough to ensure more accurate estimation of the treatment effect.
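A minimal sketch of the evaluation idea (not the paper's exact procedure): within a single simulated RCT, the noisy-but-unbiased difference-of-means on a held-out split acts as the ground-truth "label" against which a truncation-based estimator fit on the other split is scored; in the paper this score would then be aggregated across many RCTs. The heavy-tailed outcome model and the truncation level below are assumptions.

```python
# Sketch: score a (biased, lower-variance) truncated estimator against a noisy
# but unbiased difference-of-means "label" from a held-out half of one RCT.
import numpy as np

rng = np.random.default_rng(1)
true_te = 2.0
n = 20_000
treat = rng.integers(0, 2, size=n)
outcome = true_te * treat + rng.pareto(1.5, size=n) * 10   # assumed heavy-tailed noise

half = n // 2
train, test = np.arange(half), np.arange(half, n)

def diff_in_means(idx, y):
    return y[idx][treat[idx] == 1].mean() - y[idx][treat[idx] == 0].mean()

# Candidate estimator: difference in means on truncated outcomes (assumed cutoff).
y_trunc = np.minimum(outcome, np.quantile(outcome, 0.99))
candidate = diff_in_means(train, y_trunc)

# Noisy-but-unbiased "label" from the held-out half of the RCT.
label = diff_in_means(test, outcome)

print("candidate:", candidate, "label:", label, "squared error:", (candidate - label) ** 2)
```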
Advances in artificial intelligence/machine learning methods provide tools with broad applicability in scientific research. These techniques are being applied across the diverse range of nuclear physics research topics, leading to advances that will aid scientific discovery and societal applications. This review provides a snapshot of nuclear physics research that is being transformed by artificial intelligence and machine learning techniques.
Despite recent advances in the field of causal inference, to date there is no agreed-upon methodology for estimating treatment effects from collections of observational data. The consequence for clinical practice is that, when results from randomized trials are lacking, no guidance is available on which treatments appear to be effective in real-world scenarios. This paper proposes a pragmatic methodology for obtaining preliminary but robust estimates of treatment effects from observational studies, giving front-line clinicians a degree of confidence in their treatment strategies. Our study design is applied to an open problem: estimating the treatment effect of the proning maneuver for COVID-19 intensive care patients.